I see no reason why such scenarios would be considered farcical. If anything, the normal one is the worse one, and the one you suggest is something I could actually pull off right now if I had a motive to test something like that.
Oh really? You have an Omega sitting around that you can test game theory problems with? An omniscient, super-intelligent being, maybe in your garage or something?
Seriously though, for the decision of the person who picks the box to influence the person who puts in the money, the person who puts in the money has to be able to simulate the thinking of the person who picks the box. That means you have to simulate the thinking of Omega. Given that Omega is smart enough to simulate YOUR thinking in perfect detail, this is patently impossible.
The only reason for Omega not to two-box is if your decision is conditional on his decision, and much as he might wish it were, no amount of super-intelligence or super-rationality on his part is going to give you that magical insight into his mind. He knows whether you put the money in the box, and he knows that which box he picks has no influence on it.
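To make the dominance point concrete, here is a minimal sketch, assuming the usual Newcomb dollar amounts ($1,000 and $1,000,000, my own stand-in numbers) and assuming the filler's decision is already fixed before the chooser acts:

```python
# Toy payoffs (assumed amounts): the chooser's take, given that the filler's
# decision is already fixed and cannot depend on what the chooser does.
BOX_A = 1_000        # transparent box, always contains $1,000
BOX_B = 1_000_000    # opaque box, contains $1,000,000 only if the filler filled it

def chooser_payoff(box_b_filled: bool, two_box: bool) -> int:
    """Payoff to the box-chooser, given the filler's already-fixed decision."""
    payoff = BOX_B if box_b_filled else 0
    if two_box:
        payoff += BOX_A
    return payoff

# For every possible state of the opaque box, two-boxing is at least as good:
for filled in (True, False):
    one = chooser_payoff(filled, two_box=False)
    two = chooser_payoff(filled, two_box=True)
    print(f"box filled={filled}: one-box={one}, two-box={two}, difference={two - one}")
# Two-boxing comes out ahead by exactly 1,000 in both states, i.e. it dominates
# whenever the filling decision is causally independent of the choice.
```

The only way that dominance argument breaks is if the filling decision somehow depends on which box gets picked, which is exactly the conditionality being denied above.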
1) Not in my garage, but this kind of thing doesn’t have a range limit.
2) For a sufficiently broad definition of “simulate”, yes I can, and that broad definition is sufficient.
3) Who are you to say what Omega cannot do?
(Yeah, I’ve used a bit of dark arts to make it sound more impressive than it actually is, but the point still stands.)
I take it your implication is that you could play the game with a superintelligent entity somewhere far in spacetime. If this is your plan, how exactly are you going to get the results back? Not really a test if you don’t get results.
No, it’s not. You might be able to guess that a superintelligence would like negentropy and be ambivalent toward long walks on the beach, but this kind of “simulating” would never, ever allow you to beat it at rock-paper-scissors. Predicting which square of a payoff matrix it will pick, when it is in the interest of the AI to pick a different square than you think it will, is a problem of the latter kind: predicting adversarial choices, not guessing broad preferences.
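As a toy illustration of why that broad kind of simulation doesn’t help here, suppose the unpredictable player is modelled as a uniform randomizer (my assumption, standing in for an agent whose interest is to be unguessable); then any predictor, however clever, only wins about a third of the rounds:

```python
import random

MOVES = ("rock", "paper", "scissors")
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}  # key beats value

def win_rate(predictor, rounds=100_000):
    """Fraction of rounds the predictor wins against a uniformly random opponent."""
    wins = 0
    for _ in range(rounds):
        opponent = random.choice(MOVES)   # stand-in for an agent that won't be modelled
        if BEATS[predictor()] == opponent:
            wins += 1
    return wins / rounds

# A fixed "I've simulated it, it'll throw scissors" predictor and blind guessing
# both hover around 1/3; no prediction strategy does better in expectation.
print(win_rate(lambda: "rock"))
print(win_rate(lambda: random.choice(MOVES)))
```

The point of the sketch is only that knowing an opponent’s broad preferences tells you nothing useful when the opponent’s interest is in not being predicted.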
This is a general purpose argument against all reasoning relating to superintelligences, and aids your argument no more than mine.